Hoeffding's Inequality: 1.1 Approximate Counting by Random Sampling

Author

  • Hubert Chan
Abstract

Suppose there is a bag containing red balls and blue balls. You would like to estimate the fraction of red balls in the bag. However, you are only allowed to sample randomly from the bag with replacement. The straightforward method is to make T random samples and use the fraction of red balls among the samples as an estimate of the true fraction. We are interested in estimation within a certain additive error ε, i.e., if the true fraction is p, it is enough to return a number in the range [p − ε, p + ε]. The question is: how many samples are enough so that, with probability at least 1 − δ, our estimate is within an additive error of ε > 0 from the true value?
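As a rough sketch (not part of the original notes): Hoeffding's inequality gives Pr(|p̂ − p| ≥ ε) ≤ 2 exp(−2Tε²), so T ≥ ln(2/δ) / (2ε²) samples suffice. The Python snippet below illustrates this with hypothetical choices ε = 0.05, δ = 0.01 and an illustrative bag whose true red fraction is 0.3.

    import math
    import random

    def required_samples(eps, delta):
        # Hoeffding: Pr(|p_hat - p| >= eps) <= 2*exp(-2*T*eps^2); requiring this to be
        # at most delta gives T >= ln(2/delta) / (2*eps^2).
        return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

    def estimate_fraction(bag, T):
        # Draw T balls uniformly with replacement and return the observed red fraction.
        return sum(random.choice(bag) == "red" for _ in range(T)) / T

    eps, delta = 0.05, 0.01                 # hypothetical accuracy and failure probability
    T = required_samples(eps, delta)        # 1060 samples for these values
    bag = ["red"] * 300 + ["blue"] * 700    # illustrative bag with true fraction p = 0.3
    print(T, estimate_fraction(bag, T))

With these values the returned estimate lies in [0.25, 0.35] with probability at least 0.99.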


Similar Resources

Computational and Statistical Learning Theory

{0, 1}-valued random variables X_1, ..., X_n are drawn independently, each from a Bernoulli distribution with parameter p = 0.1. Define P_n := Pr((1/n) ∑_{i=1}^{n} X_i ≤ 0.2). (a) For n = 1 to 30, calculate and plot the following in the same plot (see [1, Section 6.1] for the definitions of the Hoeffding and Bernstein inequalities): i. the exact value of P_n (binomial distribution); ii. the normal approximation of P_n; iii. Hoef...
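A minimal Python sketch (standard library only) of the calculation this excerpt asks for, under my own reading of items i–iii; the plotting step is omitted, and the Hoeffding curve is interpreted as the lower bound 1 − exp(−2n(0.2 − p)²) on P_n:

    import math

    p, thresh = 0.1, 0.2
    for n in range(1, 31):
        k = math.floor(thresh * n)                  # largest count whose sample mean is <= 0.2
        # i. exact value of P_n from the Binomial(n, p) distribution
        exact = sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k + 1))
        # ii. normal (CLT) approximation of P_n
        z = (thresh - p) / math.sqrt(p * (1 - p) / n)
        normal_approx = 0.5 * (1 + math.erf(z / math.sqrt(2)))
        # iii. Hoeffding bound, read here as the lower bound 1 - exp(-2 n (thresh - p)^2)
        hoeffding = 1 - math.exp(-2 * n * (thresh - p) ** 2)
        print(n, round(exact, 4), round(normal_approx, 4), round(hoeffding, 4))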


Concentration Inequalities

1.1. Azuma-Hoeffding Inequality. Concentration inequalities are inequalities that bound the probabilities of deviations of a random variable from its mean or median. Our interest will be in concentration inequalities in which the deviation probabilities decay exponentially or superexponentially in the distance from the mean. One of the most basic such inequalities is the Azuma-Hoeffding inequality fo...
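For reference (the excerpt breaks off before stating it), a standard form of the Azuma-Hoeffding inequality for a martingale X_0, X_1, ..., X_n with bounded differences |X_k − X_{k−1}| ≤ c_k is

    \Pr\big(|X_n - X_0| \ge t\big) \;\le\; 2\exp\!\left(-\frac{t^2}{2\sum_{k=1}^{n} c_k^2}\right) \qquad \text{for all } t > 0.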


Further Optimal Regret Bounds for Thompson Sampling

The second-to-last inequality follows from the observation that the event E_i(t) was defined as μ̂_i(t) > x_i. At time τ_{k+1} for k ≥ 1, μ̂_i(τ_{k+1}) = S_i(τ_{k+1})/(k+1) ≤ S_i(τ_{k+1})/k, where the latter is simply the average of the outcomes observed from k i.i.d. plays of arm i, each of which is a Bernoulli trial with mean μ_i. Using Chernoff-Hoeffding bounds (Fact 1), we obtain that Pr(μ̂_i(τ_{k+1}) > x_i) ≤ Pr(S_i(τ_{k+1})/k > ...


Constructive Proofs of Concentration Bounds

We give a simple combinatorial proof of the Chernoff-Hoeffding concentration bound [Che52, Hoe63], which says that the sum of independent {0, 1}-valued random variables is highly concentrated around the expected value. Unlike the standard proofs, our proof does not use the method of higher moments, but rather uses a simple and intuitive counting argument. In addition, our proof is constructive ...


Material for "Combinatorial multi-armed bandit"

We use the following two well-known bounds in our proofs. Lemma 1 (Chernoff-Hoeffding bound). Let X_1, ..., X_n be random variables with common support [0, 1] and E[X_i] = μ. Let S_n = X_1 + ... + X_n. Then for all t ≥ 0, Pr[S_n ≥ nμ + t] ≤ e^{−2t²/n} and Pr[S_n ≤ nμ − t] ≤ e^{−2t²/n}. Lemma 2 (Bernstein inequality). Let X_1, ..., X_n be independent zero-mean random variables. If for all 1 ≤ i ≤ n, |X_i| ≤ k...
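For completeness (Lemma 2 is truncated above), a standard statement of Bernstein's inequality for independent zero-mean random variables X_1, ..., X_n bounded in absolute value by a constant M is

    \Pr\!\left[\sum_{i=1}^{n} X_i \ge t\right] \;\le\; \exp\!\left(-\frac{t^2/2}{\sum_{i=1}^{n} \mathbb{E}[X_i^2] + Mt/3}\right) \qquad \text{for } t \ge 0.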



Journal:

Volume   Issue

Pages   -

Publication date: 2012